27 research outputs found

    Deep Expander Networks: Efficient Deep Networks from Graph Theory

    Efficient CNN designs like ResNets and DenseNets were proposed to improve accuracy vs. efficiency trade-offs. They essentially increased the connectivity, allowing efficient information flow across layers. Inspired by these techniques, we propose to model connections between filters of a CNN using graphs which are simultaneously sparse and well connected. Sparsity results in efficiency, while good connectivity preserves the expressive power of the CNN. We use a well-studied class of graphs from theoretical computer science, known as expander graphs, that satisfies these properties. Expander graphs are used to model connections between filters in CNNs to design networks called X-Nets. We present two guarantees on the connectivity of X-Nets: each node influences every node in a layer within a logarithmic number of steps, and the number of paths between two sets of nodes is proportional to the product of their sizes. We also propose efficient training and inference algorithms, making it possible to train deeper and wider X-Nets effectively. Expander-based models give a 4% improvement in accuracy on MobileNet over grouped convolutions, a popular technique which has the same sparsity but worse connectivity. X-Nets give better performance trade-offs than the original ResNet and DenseNet-BC architectures. We achieve model sizes comparable to state-of-the-art pruning techniques using our simple architecture design, without any pruning. We hope that this work motivates other approaches to utilize results from graph theory to develop efficient network architectures. Comment: ECCV'18
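
    The wiring idea above can be sketched with a random d-regular bipartite mask between input and output filters; random regular bipartite graphs are expanders with high probability. This is a minimal illustration, not the authors' X-Net code, and the layer sizes and degree below are hypothetical.

```python
import random

def expander_mask(n_in, n_out, degree, seed=0):
    """Sparse bipartite connectivity mask: each output filter is wired to
    `degree` distinct input filters chosen uniformly at random."""
    rng = random.Random(seed)
    mask = [[0] * n_in for _ in range(n_out)]
    for out in range(n_out):
        for i in rng.sample(range(n_in), degree):  # distinct neighbours
            mask[out][i] = 1
    return mask

# Hypothetical layer: 64 -> 64 filters, each output sees 8 inputs.
# This matches the sparsity of a grouped convolution with 8 groups,
# but the random wiring mixes information across all filters.
mask = expander_mask(n_in=64, n_out=64, degree=8)
density = sum(map(sum, mask)) / (64 * 64)  # fraction of connections kept
```

    In a framework such as PyTorch, one would multiply a dense weight tensor by this fixed 0/1 mask on every forward pass, keeping the layer sparse throughout training.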

    Formalizing Data Deletion in the Context of the Right to be Forgotten

    The right of an individual to request the deletion of their personal data by an entity that might be storing it -- referred to as the right to be forgotten -- has been explicitly recognized, legislated, and exercised in several jurisdictions across the world, including the European Union, Argentina, and California. However, much of the discussion surrounding this right offers only an intuitive notion of what it means for it to be fulfilled -- of what it means for such personal data to be deleted. In this work, we provide a formal definitional framework for the right to be forgotten using tools and paradigms from cryptography. In particular, we provide a precise definition of what could be (or should be) expected from an entity that collects individuals' data when a request is made of it to delete some of this data. Our framework captures several, though not all, relevant aspects of typical systems involved in data processing. While it cannot be viewed as expressing the statements of current laws (especially since these are rather vague in this respect), our work offers technically precise definitions that represent possibilities for what the law could reasonably expect, and alternatives for what future versions of the law could explicitly require. Finally, with the goal of demonstrating the applicability of our framework and definitions, we consider various natural and simple scenarios where the right to be forgotten comes up. For each of these scenarios, we highlight the pitfalls that arise even in genuine attempts at implementing systems offering deletion guarantees, and also describe technological solutions that provably satisfy our definitions. These solutions bring together techniques built by various communities.
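
    One well-known technological approach to deletion guarantees is crypto-shredding: encrypt each record under its own key and implement deletion as key erasure, so any lingering ciphertext becomes unrecoverable. The sketch below is a toy under stated assumptions (a SHA-256 counter-mode keystream stands in for a real cipher; the class and method names are ours), not the paper's formal construction.

```python
import os
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # SHA-256 in counter mode as a stand-in stream cipher (illustration only)
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

class ShreddingStore:
    """Each record is encrypted under a fresh key; deleting the key makes
    the remaining ciphertext (and any backup of it) unrecoverable."""
    def __init__(self):
        self.keys = {}   # erased on deletion
        self.blobs = {}  # may persist, e.g. in backups
    def put(self, rid, data: bytes):
        key = os.urandom(32)
        self.keys[rid] = key
        self.blobs[rid] = xor(data, keystream(key, len(data)))
    def get(self, rid) -> bytes:
        blob = self.blobs[rid]
        return xor(blob, keystream(self.keys[rid], len(blob)))
    def delete(self, rid):
        del self.keys[rid]  # the ciphertext may linger; the key does not

store = ShreddingStore()
store.put("r1", b"personal data")
restored = store.get("r1")
store.delete("r1")
```

    Whether key erasure alone counts as deletion is exactly the kind of question the paper's definitions are meant to make precise.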

    Efficient and Provable White-Box Primitives

    In recent years there have been several attempts to build white-box block ciphers whose implementations aim to be incompressible. This includes the weak white-box ASASA construction by Bouillaguet, Biryukov and Khovratovich from Asiacrypt 2014, and the recent space-hard construction by Bogdanov and Isobe from CCS 2015. In this article we propose the first constructions aiming at the same goal while offering provable security guarantees. Moreover, we propose concrete instantiations of our constructions, which prove to be quite efficient and competitive with prior work. Thus provable security comes with a surprisingly low overhead.

    Multi Collision Resistant Hash Functions and their Applications

    Collision resistant hash functions are functions that shrink their input, but for which it is computationally infeasible to find a collision, namely two strings that hash to the same value (although collisions are abundant). In this work we study multi-collision resistant hash functions (MCRH), a natural relaxation of collision resistant hash functions in which it is difficult to find a t-way collision (i.e., t strings that hash to the same value), although finding (t-1)-way collisions could be easy. We show the following: 1. The existence of MCRH follows from the average-case hardness of a variant of the Entropy Approximation problem. The goal in the entropy approximation problem (Goldreich, Sahai and Vadhan, CRYPTO '99) is to distinguish circuits whose output distribution has high entropy from those having low entropy. 2. MCRH imply the existence of constant-round statistically hiding (and computationally binding) commitment schemes. As a corollary, using a result of Haitner et al. (SICOMP, 2015), we obtain a black-box separation of MCRH from any one-way permutation.
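
    To make the t-way collision notion concrete, here is a toy search for a 3-way collision in a deliberately weak hash (SHA-256 truncated to 16 bits, so collisions are abundant); real MCRH candidates are of course not built this way.

```python
import hashlib
from collections import defaultdict

def toy_hash(msg: bytes) -> bytes:
    # 16-bit truncation of SHA-256: shrinking, but far too short to be
    # collision resistant -- ideal for illustrating t-way collisions
    return hashlib.sha256(msg).digest()[:2]

def find_t_collision(t: int):
    """Hash successive counter values until some digest occurs t times."""
    buckets = defaultdict(list)
    i = 0
    while True:
        m = i.to_bytes(8, "big")
        buckets[toy_hash(m)].append(m)
        if len(buckets[toy_hash(m)]) == t:
            return buckets[toy_hash(m)]
        i += 1

triple = find_t_collision(3)  # three distinct inputs, one digest
```

    Note that 2-way collisions fall out of this search long before the 3-way one does, which is the gap the MCRH definition exploits.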

    Masking Fuzzy-Searchable Public Databases

    We introduce and study the notion of keyless fuzzy search (KlFS), which allows one to mask a publicly available database in such a way that any third party can retrieve content if and only if it possesses some data that is “close to” the encrypted data – no cryptographic keys are involved. We devise a formal security model that asks a scheme not to leak any information about the data and the queries except for some well-defined leakage function if attackers cannot guess the right query to make. In particular, our definition implies that recovering high entropy data protected with a KlFS scheme is costly. We propose two KlFS schemes: both use locality-sensitive hashes (LSH), cryptographic hashes and symmetric encryption as building blocks. The first scheme is generic and works for abstract plaintext domains. The second scheme is specifically suited for databases of images. To demonstrate the feasibility of our KlFS for images, we implemented and evaluated a prototype system that supports image search by object similarity on a masked database.
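
    The key idea, that an LSH sketch of the data stands in for a cryptographic key, can be sketched as follows. This is a simplified illustration (bit-sampling LSH for Hamming distance, a SHA-256 counter-mode keystream, hypothetical positions and names), not the paper's instantiation.

```python
import hashlib

SAMPLED = [1, 5, 11, 17, 23, 29, 31, 37]  # fixed bit positions (bit-sampling LSH)

def lsh(bits: str) -> str:
    # Items within small Hamming distance often agree on every sampled
    # position and therefore map to the same sketch.
    return "".join(bits[p] for p in SAMPLED)

def keystream(key: bytes, n: int) -> bytes:
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def mask(payload: bytes, bits: str) -> bytes:
    # Keyless: the encryption key is derived from the LSH sketch, so only
    # a party holding a sufficiently close item can re-derive it.
    key = hashlib.sha256(b"klfs|" + lsh(bits).encode()).digest()
    return bytes(a ^ b for a, b in zip(payload, keystream(key, len(payload))))

stored_bits = "1011001110100011101011010011010111001101"
masked = mask(b"image metadata", stored_bits)
# A query differing only at unsampled positions 0 and 2 yields the same
# sketch, hence the same key; XOR masking is its own inverse.
query_bits = "0001001110100011101011010011010111001101"
recovered = mask(masked, query_bits)
```

    A real scheme would hash several independent LSH sketches to boost the probability that close items share at least one derived key.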

    Counting independent sets in graphs with bounded bipartite pathwidth

    The Glauber dynamics can efficiently sample independent sets almost uniformly at random in polynomial time for graphs in a certain class. The class is determined by boundedness of a new graph parameter called bipartite pathwidth. This result, which we prove for the more general hardcore distribution with fugacity λ, can be viewed as a strong generalisation of Jerrum and Sinclair’s work on approximately counting matchings. The class of graphs with bounded bipartite pathwidth includes line graphs and claw-free graphs, which generalise line graphs. We consider two further generalisations of claw-free graphs and prove that these classes have bounded bipartite pathwidth.
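
    The dynamics itself is simple to state; a minimal sketch for the hardcore model on a small graph (the graph and parameters below are hypothetical) is:

```python
import random

def glauber_independent_set(adj, steps, lam=1.0, seed=0):
    """Glauber dynamics for the hardcore model with fugacity lam: pick a
    uniformly random vertex; if it has an occupied neighbour, evict it,
    otherwise occupy it with probability lam / (1 + lam)."""
    rng = random.Random(seed)
    occupied = [False] * len(adj)
    for _ in range(steps):
        v = rng.randrange(len(adj))
        if any(occupied[u] for u in adj[v]):
            occupied[v] = False
        else:
            occupied[v] = rng.random() < lam / (1 + lam)
    return {v for v, occ in enumerate(occupied) if occ}

# Path graph on 5 vertices -- a graph of very small bipartite pathwidth
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
sample = glauber_independent_set(adj, steps=10_000)
```

    Every state of the chain is an independent set by construction; the theorem in the abstract concerns how quickly the chain mixes to the hardcore distribution.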

    Super-Linear Time-Memory Trade-Offs for Symmetric Encryption

    We build symmetric encryption schemes from a pseudorandom function/permutation with domain size N which have very high security -- in terms of the amount of messages q they can securely encrypt -- assuming the adversary has S < N bits of memory. We aim to minimize the number of calls k we make to the underlying primitive to achieve a certain q, or equivalently, to maximize the achievable q for a given k. We target in particular q ≫ N, in contrast to recent works (Jaeger and Tessaro, EUROCRYPT '19; Dinur, EUROCRYPT '20) which aim to beat the birthday barrier with one call when S < √N. Our first result gives new and explicit bounds for the Sample-then-Extract paradigm by Tessaro and Thiruvengadam (TCC '18). We show instantiations for which q = Ω((N/S)^k). If S < N^(1−α), Thiruvengadam and Tessaro's weaker bounds only guarantee q > N when k = Ω(log N). In contrast, here, we show this is true already for k = O(1/α). We also consider a scheme by Bellare, Goldreich and Krawczyk (CRYPTO '99) which evaluates the primitive on k independent random strings, and masks the message with the XOR of the outputs. Here, we show q = Ω((N/S)^(k/2)), using new combinatorial bounds on the list-decodability of XOR codes which are of independent interest. We also study best-possible attacks against this construction.
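
    The Bellare-Goldreich-Krawczyk scheme mentioned above is easy to sketch: each encryption evaluates the primitive on k fresh random points and masks the message with the XOR of the outputs. Below, a keyed SHA-256 call is a toy stand-in for the PRF and the names are ours; the security bounds of course concern the real primitive, not this sketch.

```python
import os
import hashlib

def prf(key: bytes, x: bytes) -> bytes:
    # toy stand-in for the pseudorandom function, with 16-byte outputs
    return hashlib.sha256(key + x).digest()[:16]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(key: bytes, msg16: bytes, k: int = 4):
    """Mask a 16-byte message with the XOR of k PRF outputs on fresh
    random points; the points travel in the clear as the nonce."""
    points = [os.urandom(16) for _ in range(k)]
    pad = bytes(16)
    for r in points:
        pad = xor(pad, prf(key, r))
    return points, xor(msg16, pad)

def decrypt(key: bytes, points, ct: bytes) -> bytes:
    pad = bytes(16)
    for r in points:
        pad = xor(pad, prf(key, r))
    return xor(ct, pad)

key = os.urandom(16)
points, ct = encrypt(key, b"sixteen byte msg")
```

    A low-memory adversary cannot remember enough PRF evaluations to cancel all k terms of the pad at once, which is the intuition behind the (N/S)^(k/2) bound.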

    Fine-Grained Cryptography Revisited

    Fine-grained cryptographic primitives are secure against adversaries with bounded resources and can be computed by honest users with fewer resources than the adversaries. In this paper, we revisit the results of Degwekar, Vaikuntanathan, and Vasudevan in Crypto 2016 on fine-grained cryptography and show constructions of three key fundamental fine-grained cryptographic primitives: one-way permutations, hash proof systems (which in turn imply a public-key encryption scheme secure against chosen-ciphertext attacks), and trapdoor one-way functions. All of our constructions are computable in NC^1 and secure against (non-uniform) NC^1 circuits under the widely believed worst-case assumption NC^1 ⊊ ⊕L/poly.

    Tinted, Detached, and Lazy CNF-XOR Solving and Its Applications to Counting and Sampling

    Given a Boolean formula F, the problem of counting seeks to estimate the number of solutions of F, while the problem of uniform sampling seeks to sample solutions of F uniformly at random. Counting and uniform sampling are fundamental problems in computer science with a wide range of applications, ranging from constrained random simulation and probabilistic inference to network reliability and beyond. The past few years have witnessed the rise of hashing-based approaches that use XOR-based hashing and employ SAT solvers to solve the resulting CNF formulas conjoined with XOR constraints. Since over 99% of the runtime of hashing-based techniques is spent inside the SAT queries, improving CNF-XOR solvers has emerged as a key challenge. In this paper, we identify the key performance bottlenecks in the recently proposed architecture, and we focus on overcoming these bottlenecks by accelerating the XOR handling within the SAT solver and by improving the solver integration through a smarter use of (partial) solutions. We integrate the resulting system with a state-of-the-art approximate model counter and a state-of-the-art almost-uniform model sampler. Through an extensive evaluation over a large benchmark set of 1896 instances, we observe consistent speed-ups for both counting and sampling; in particular, we solve 77 and 51 more instances for counting and sampling, respectively.
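
    The hashing-based idea can be seen at toy scale: each random XOR (parity) constraint halves the solution set in expectation, so counting the survivors of m constraints and scaling by 2^m estimates the model count. The brute-force "solver" and the small formula below are illustrative stand-ins for a CNF-XOR SAT solver.

```python
import itertools
import random

def solutions(n, formula):
    # brute-force stand-in for the SAT solver on n-variable formulas
    return [b for b in itertools.product([0, 1], repeat=n) if formula(b)]

def approx_count(n, formula, m, trials=200, seed=0):
    """Average, over random parity constraints, of (#survivors) * 2^m."""
    rng = random.Random(seed)
    sols = solutions(n, formula)
    total = 0
    for _ in range(trials):
        xors = [([rng.randrange(2) for _ in range(n)], rng.randrange(2))
                for _ in range(m)]
        survivors = sum(
            1 for s in sols
            if all(sum(c * x for c, x in zip(coeffs, s)) % 2 == rhs
                   for coeffs, rhs in xors))
        total += survivors * 2 ** m
    return total / trials

# (x0 or x1) and (not x2 or x3): exactly 9 of the 16 assignments satisfy it
f = lambda b: (b[0] or b[1]) and (not b[2] or b[3])
est = approx_count(4, f, m=2)
```

    In the real pipeline the expensive step is deciding satisfiability of the CNF-XOR conjunction, which is why the paper focuses on the solver's XOR handling.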

    Quantum Security of NMAC and Related Constructions: PRF Domain Extension Against Quantum Attacks

    We prove the security of NMAC, HMAC, AMAC, and the cascade construction with fixed input length as quantum-secure pseudorandom functions (PRFs). Namely, they are indistinguishable from a random oracle against any polynomial-time quantum adversary that can make quantum superposition queries. In contrast, many blockcipher-based PRFs including CBC-MAC were recently broken by quantum superposition attacks. Classical proof strategies for these constructions do not generalize to the quantum setting, and we observe that they sometimes even fail completely (e.g., the universal-hash-then-PRF paradigm for proving security of NMAC). Instead, we propose a direct hybrid argument as a new proof strategy (both classically and quantumly). We first show that a quantum-secure PRF is secure against key-recovery attacks, and remains secure under random leakage of the key. Next, as a key technical tool, we extend the oracle indistinguishability framework of Zhandry in two directions: we consider distributions on functions rather than strings, and we also consider a relative setting, where an additional oracle, possibly correlated with the distributions, is given to the adversary as well. This enables a hybrid argument to prove the security of NMAC. Security proofs for other constructions follow similarly.
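
    The nested structure under discussion is the one HMAC inherits from NMAC: an inner keyed cascade over the message followed by an outer keyed call on the inner digest. As a concrete (classical) reference point, HMAC-SHA256 can be written out in a few lines and checked against Python's hmac module:

```python
import hashlib
import hmac

def hmac_sha256(key: bytes, msg: bytes) -> bytes:
    """HMAC as two nested hash calls (the NMAC shape): an inner cascade
    keyed by key XOR ipad, then an outer call keyed by key XOR opad."""
    block = 64  # SHA-256 block size in bytes
    if len(key) > block:
        key = hashlib.sha256(key).digest()
    key = key.ljust(block, b"\x00")
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5C for b in key)
    inner = hashlib.sha256(ipad + msg).digest()
    return hashlib.sha256(opad + inner).digest()
```

    The paper's question is whether this two-layer keying remains a PRF when the adversary may query it in quantum superposition; the code above only fixes intuition for the construction being analyzed.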